# Pruned models
## Blockchainlabs 7B Merged Test2 4 Prune Sft 4bit DPO Orca

A compact 7B-parameter LLM by alnrg2arg, pruned and fine-tuned with DPO, optimized for on-device use.

Tags: Large Language Model · Transformers · English
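The DPO training mentioned above optimizes a preference loss directly, without a separate reward model. A minimal sketch of the DPO loss for a single preference pair, assuming per-sequence log-probabilities have already been computed (the function and argument names here are illustrative, not taken from this model's training code):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the total log-probability that the policy or the
    frozen reference model assigns to the chosen / rejected response.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the reference model, minus the same for the rejected one.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(logits)): shrinks as the policy's preference margin grows.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

print(dpo_loss(-10.0, -12.0, -11.0, -11.5, beta=0.1))
```

When the policy and reference agree exactly, the margin is zero and the loss sits at log(2); pushing probability toward the chosen response drives it lower.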
## Sheared-LLaMA-1.3B-Pruned

Sheared-LLaMA-1.3B-Pruned is a 1.3B-parameter model by princeton-nlp, pruned from Llama-2-7b without continued pretraining; it is primarily used for studying pruning techniques and their impact.

Tags: Large Language Model · Transformers
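Sheared-LLaMA itself is produced by structured pruning (removing whole rows, heads, and layers). As a rough, self-contained illustration of the simpler related idea of magnitude pruning — not the Sheared-LLaMA method, and with purely illustrative names:

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a weight list.

    A toy unstructured-pruning step; structured pruning as used for
    Sheared-LLaMA removes entire rows/heads/layers instead.
    """
    k = int(len(weights) * sparsity)  # how many weights to zero out
    # Rank positions by |w| and mark the k smallest for removal.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

print(magnitude_prune([0.9, -0.05, 0.4, 0.01], sparsity=0.5))
# keeps the two largest-magnitude weights, zeroes the rest
```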